

Why Experts Can't Agree on Whether AI Has a Mind

TIME - Tech

Pillay is an editorial fellow at TIME. "I'm not used to getting nasty emails from a holy man," says Professor Michael Levin, a developmental biologist at Tufts University. Levin was presenting his research to a group of engineers interested in spiritual matters in India, arguing that properties like "mind" and intelligence can be observed even in cellular systems, and that they exist on a spectrum. But when he pushed further--arguing that the same properties emerge everywhere, including in computers--the reception shifted.


Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence

Cai, Borui, Zhao, Yao

arXiv.org Artificial Intelligence

We propose a new perspective for approaching artificial general intelligence (AGI) through an intelligence foundation model (IFM). Unlike existing foundation models (FMs), which specialize in pattern learning within specific domains such as language, vision, or time series, IFM aims to acquire the underlying mechanisms of intelligence by learning directly from diverse intelligent behaviors. Vision, language, and other cognitive abilities are manifestations of intelligent behavior; learning from this broad range of behaviors enables the system to internalize the general principles of intelligence. Based on the fact that intelligent behaviors emerge from the collective dynamics of biological neural systems, IFM consists of two core components: a novel network architecture, termed the state neural network, which captures neuron-like dynamic processes, and a new learning objective, neuron output prediction, which trains the system to predict neuronal outputs from collective dynamics. The state neural network emulates the temporal dynamics of biological neurons, allowing the system to store, integrate, and process information over time, while the neuron output prediction objective provides a unified computational principle for learning these structural dynamics from intelligent behaviors. Together, these innovations establish a biologically grounded and computationally scalable foundation for building systems capable of generalization, reasoning, and adaptive learning across domains, representing a step toward true AGI.
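The two components described in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the authors' architecture: the leaky-state update standing in for a "state neuron", the least-mean-squares read-out, and all names (`w_pred`, `decay`, etc.) are invented. The point is only to show the shape of the objective: predict one neuron's output from the collective state of the others.

```python
import numpy as np

# Hedged sketch of a "state neuron" plus a "neuron output prediction"
# objective. All dynamics and hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)
n_neurons, steps, lr = 8, 200, 0.05

w_in = rng.normal(0, 0.5, (n_neurons, n_neurons))   # fixed input mixing
w_pred = np.zeros(n_neurons - 1)                    # learned linear predictor
state = np.zeros(n_neurons)
decay = 0.9                                         # leaky state: integrates over time

errors = []
for t in range(steps):
    x = rng.normal(0, 1, n_neurons)                 # stand-in "behavior" input
    # Leaky integration: the state stores and blends past inputs over time.
    state = decay * state + (1 - decay) * np.tanh(w_in @ x)
    # Objective: predict neuron 0's output from the other neurons' states.
    target, others = state[0], state[1:]
    pred = w_pred @ others
    err = pred - target
    w_pred -= lr * err * others                     # least-mean-squares update
    errors.append(float(err ** 2))

print(float(np.mean(errors[:20])), float(np.mean(errors[-20:])))
```

The read-out is learnable only to the extent that the neurons' states are correlated through the shared input, which loosely mirrors the paper's premise that intelligent behavior is recoverable from collective dynamics.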


Normality and the Turing Test

Kabbach, Alexandre

arXiv.org Artificial Intelligence

This paper proposes to revisit the Turing test through the concept of normality. Its core argument is that the Turing test is a test of normal intelligence as assessed by a normal judge. First, in the sense that the Turing test targets normal/average rather than exceptional human intelligence, so that successfully passing the test requires machines to "make mistakes" and display imperfect behavior just like normal/average humans. Second, in the sense that the Turing test is a statistical test where judgments of intelligence are never carried out by a single "average" judge (understood as non-expert) but always by a full jury. As such, the notion of "average human interrogator" that Turing talks about in his original paper should be understood primarily as referring to a mathematical abstraction made of the normalized aggregate of individual judgments of multiple judges. Its conclusions are twofold. First, it argues that large language models such as ChatGPT are unlikely to pass the Turing test as those models precisely target exceptional rather than normal/average human intelligence. As such, they constitute models of what it proposes to call artificial smartness rather than artificial intelligence, insofar as they deviate from the original goal of Turing for the modeling of artificial minds. Second, it argues that the objectivization of normal human behavior in the Turing test fails due to the game configuration of the test which ends up objectivizing normative ideals of normal behavior rather than normal behavior per se.


Morphological Cognition: Classifying MNIST Digits Through Morphological Computation Alone

Mertan, Alican, Cheney, Nick

arXiv.org Artificial Intelligence

With the rise of modern deep learning, neural networks have become an essential part of virtually every artificial intelligence system, making it difficult even to imagine different models for intelligent behavior. In contrast, nature provides us with many different mechanisms for intelligent behavior, most of which we have yet to replicate. One such underinvestigated aspect of intelligence is embodiment and the role it plays in intelligent behavior. In this work, we focus on how the simple and fixed behavior of constituent parts of a simulated physical body can result in an emergent behavior that can be classified as cognitive by an outside observer. Specifically, we show how simulated voxels with fixed behaviors can be combined to create a robot such that, when presented with an image of an MNIST digit zero, it moves towards the left; and when it is presented with an image of an MNIST digit one, it moves towards the right. Such robots possess what we refer to as ``morphological cognition'' -- the ability to perform cognitive behavior as a result of morphological processes. To the best of our knowledge, this is the first demonstration of a high-level mental faculty such as image classification performed by a robot without any neural circuitry. We hope that this work serves as a proof-of-concept and fosters further research into different models of intelligence.


On the Definition of Intelligence

Ng, Kei-Sing

arXiv.org Artificial Intelligence

To engineer AGI, we should first capture the essence of intelligence in a species-agnostic form that can be evaluated, while being sufficiently general to encompass diverse paradigms of intelligent behavior, including reinforcement learning, generative models, classification, analogical reasoning, and goal-directed decision-making. We propose a general criterion based on \textit{entity fidelity}: Intelligence is the ability, given entities exemplifying a concept, to generate entities exemplifying the same concept. We formalise this intuition as \(\varepsilon\)-concept intelligence: a system is \(\varepsilon\)-intelligent with respect to a concept if no chosen admissible distinguisher can separate its generated entities from original entities beyond tolerance \(\varepsilon\). We present the formal framework, outline empirical protocols, and discuss implications for evaluation, safety, and generalization.
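The distinguisher criterion sketched in the abstract has a natural empirical reading, which the following toy illustrates. The names (`distinguisher_advantage`, `is_epsilon_intelligent`) and the toy "concept" are assumptions for illustration; the paper's formal definitions may differ.

```python
import random

# A "concept" is represented by a sample of original entities, a generator
# by a sample of generated entities, and a distinguisher by any function
# entity -> {0, 1} (1 = "looks original").

def distinguisher_advantage(distinguisher, originals, generated):
    """Empirical advantage: |P[D(x)=1 | original] - P[D(x)=1 | generated]|."""
    p_orig = sum(distinguisher(x) for x in originals) / len(originals)
    p_gen = sum(distinguisher(x) for x in generated) / len(generated)
    return abs(p_orig - p_gen)

def is_epsilon_intelligent(distinguishers, originals, generated, eps):
    """Passes if no admissible distinguisher separates the samples beyond eps."""
    return all(
        distinguisher_advantage(d, originals, generated) <= eps
        for d in distinguishers
    )

# Toy concept: "numbers near 10". The generator samples the same
# distribution as the originals, so a threshold distinguisher gains
# little advantage and the generator passes at tolerance eps = 0.1.
random.seed(0)
originals = [10 + random.gauss(0, 1) for _ in range(1000)]
generated = [10 + random.gauss(0, 1) for _ in range(1000)]
threshold_d = lambda x: 1 if x > 10 else 0
print(is_epsilon_intelligent([threshold_d], originals, generated, eps=0.1))
```

Restricting to a chosen class of admissible distinguishers is what makes the criterion testable: against all possible distinguishers, any imperfect generator would fail.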


Intelligence as Computation

Brock, Oliver

arXiv.org Artificial Intelligence

This paper proposes a specific conceptualization of intelligence as computation. This conceptualization is intended to provide a unified view for all disciplines of intelligence research. Already, it unifies several conceptualizations currently under investigation, including physical, neural, embodied, morphological, and mechanical intelligences. To achieve this, the proposed conceptualization explains the differences among existing views by different computational paradigms, such as digital, analog, mechanical, or morphological computation. Viewing intelligence as a composition of computations from different paradigms, the challenges posed by previous conceptualizations are resolved. Intelligence is hypothesized to be a multi-paradigmatic computation relying on specific computational principles. These principles distinguish intelligence from other, non-intelligent computations. The proposed conceptualization implies a multi-disciplinary research agenda that is intended to lead to a unified science of intelligence.


The Impacts of Human-Cobot Collaboration on Perceived Cognitive Load and Usability during an Industrial Task: An Exploratory Experiment

Fournier, Étienne, Kilgus, Dorilys, Landry, Aurélie, Hmedan, Belal, Pellier, Damien, Fiorino, Humbert, Jeoffrion, Christine

arXiv.org Artificial Intelligence

Since cobots (collaborative robots) are increasingly being introduced in industrial environments, being aware of their potential positive and negative impacts on human collaborators is essential. This study guides occupational health workers by identifying the potential gains (reduced perceived time demand, number of gestures and number of errors) and concerns (the cobot takes a long time to perceive its environment, which leads to an increased completion time) associated with working with cobots. In our study, collaboration between human and cobot during an assembly task did not negatively impact perceived cognitive load; it increased completion time (while decreasing perceived time demand) and reduced both the number of gestures performed by participants and the number of errors made. Thus, performing the task in collaboration with a cobot improved the user's experience and performance, except for completion time, which increased. This study opens up avenues to investigate how to improve cobots to ensure the usability of the human-machine system at work.


The Turing Deception

Noever, David, Ciolino, Matt

arXiv.org Artificial Intelligence

The outlier, however, for ChatGPT is Appendix F, based on the prompt to generate variants on poetry dedicated to Turing. In this instance, the generated content bypassed OpenAI's detector with high confidence as real (99.98%). In their original report [24], the authors found "detection rates of ~95% for detecting 1.5B GPT-2-generated text" and noted that "We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective." Alongside the evolution of ever larger language models (>100 billion parameters), refinements also include built-in heuristics or guardrails for model execution. The Instruct series of GPT-3 demonstrated the ability to answer questions directly without conversational meanderings. ChatGPT includes longer-term conversational memory, such that the API can track the dialog even with leaps of narration that single API calls could not span. One can test dialogs with impersonal pronouns like "it" carrying forward in the conversation with context from previous API calls in a single session -- one easily grasped example of ChatGPT's API memory being both powerful and expensive to encode for more extended conversations. As Turing himself noted of the human capacity for memory [1]: "Actual human computers really remember what they have to do ... Constructing instruction tables is usually described as 'programming.'"


Artificial Intelligence 101 for Digital Marketing - Marji J. Sherman - NFTs, Metaverse, Social, Digital

#artificialintelligence

In elementary school, I remember seeing a Scholastic visual of what the world would look like by 2010. While we are still working on flying cars, the artist illustrated artificial intelligence (AI) somewhere in that futuristic world. Many marketers today shy away from digging deeper into AI use cases because their brand is still working out how to use essential media and basic website tools effectively. I am here to tell you that it's simpler than it sounds and can significantly impact your brand's bottom line, especially with social media use declining. If a client asked me whether to build new social media channels or integrate AI into their existing strategy, I would lean towards AI development.


Google Engineer Claims AI Chatbot Is Sentient: Why That Matters - AI Summary

#artificialintelligence

"I want everyone to understand that I am, in fact, a person," wrote LaMDA (Language Model for Dialogue Applications) in an "interview" conducted by engineer Blake Lemoine and one of his colleagues. Perhaps most striking are the exchanges related to the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient. That emotional response fits in with the many, many experiments that have repeatedly shown the strength of the human tendency toward animism: attributing a soul to the objects around us, especially those we are most fond of or that have a minimal ability to interact with the world around them. "We attribute characteristics to machines that they do not and cannot have." He encounters this phenomenon with his and his colleagues' humanoid robot Abel, which is designed to emulate our facial expressions in order to convey emotions.